
    Multi-Core Parallel Routing

    The recent increase in the amount of data (i.e., big data) has led to higher data volumes being transferred and processed over the network. Over the last several years, the deployment of multi-core routers has also grown rapidly. However, big data transfers do not leverage these powerful multi-core routers to the extent possible, particularly in the key function of routing. Our main goal is to use these cores more effectively and efficiently in routing big data transfers. In this dissertation, we propose a novel approach to parallelize data transfers by leveraging the multi-core CPUs in routers. Legacy routing protocols, e.g., OSPF for intra-domain routing, send data from source to destination on a single shortest path. We describe an end-to-end method that distributes data optimally over flows by using multiple paths. We generate new virtual topology substrates from the underlying router topology and perform shortest-path routing on each substrate. Within this framework, even though shortest paths can be calculated with well-known techniques such as OSPF's Dijkstra implementation, finding optimal substrates that maximize the aggregate throughput over multiple end-to-end paths remains an NP-hard problem. We focus our efforts on this problem and design heuristics for substrate generation from a given router topology. The interim goal of our heuristics is to generate substrates such that the shortest paths between a source-destination pair on the different substrates overlap minimally with each other. Once these substrates are determined, we assign each substrate to a core in the routers and employ a multi-path transport protocol, such as MPTCP, to perform end-to-end parallel transfers.
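
    The substrate-generation idea can be pictured as follows: compute a shortest path, then build the next substrate by removing the edges that path used, so paths obtained from successive substrates overlap as little as possible. Below is a minimal illustrative sketch of that idea (not the dissertation's actual heuristic), assuming the networkx library; the function name is hypothetical.

        # Hypothetical sketch: derive paths between src and dst that
        # overlap minimally, one per substrate. Assumes networkx.
        import networkx as nx

        def minimally_overlapping_paths(graph, src, dst, k):
            """Return up to k paths, each computed on a substrate that
            excludes the edges used by previously found paths."""
            substrate = graph.copy()
            paths = []
            for _ in range(k):
                try:
                    path = nx.shortest_path(substrate, src, dst, weight="weight")
                except nx.NetworkXNoPath:
                    break  # no edge-disjoint path remains on this substrate
                paths.append(path)
                # Next substrate: drop this path's edges to avoid overlap.
                substrate.remove_edges_from(zip(path, path[1:]))
            return paths

        G = nx.Graph()
        G.add_weighted_edges_from([("a", "b", 1), ("b", "d", 1),
                                   ("a", "c", 2), ("c", "d", 2)])
        print(minimally_overlapping_paths(G, "a", "d", 2))
        # [['a', 'b', 'd'], ['a', 'c', 'd']]

    Each resulting path would then be pinned to a different router core, with MPTCP striping the transfer across them.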

    Atrioventricular Septal Defects Repair: Comparison of Classic Single Patch and Double-Patch Techniques

    Objective: Patch techniques are virtually always used in the surgery of pediatric patients with complete atrioventricular septal defects. In this study, we describe our single-center, single-surgeon experience and results with the classic single-patch and double-patch techniques for repairing complete atrioventricular septal defects. Materials and Methods: This retrospective descriptive study included 30 patients who underwent intracardiac repair of complete atrioventricular septal defect in the Department of Pediatric Cardiovascular Surgery of Ankara Bilkent City Hospital. The study was conducted between February 2019 and December 2021. Patients in group S underwent surgery using the traditional single-patch method, while group D included patients who underwent repair using the double-patch approach (n = 10). Patients' demographic and clinical information was obtained from institutional databases and medical records. Postoperative complications were recorded. Results: When the preoperative and postoperative insufficiency levels of the valves were compared with the Wilcoxon signed-rank test, the findings were not statistically significant for the left atrioventricular valves but were statistically significant for the right atrioventricular valves (p=0.02). When we compared postoperative valve regurgitation between the two techniques with the Kruskal-Wallis test, no significant difference was found in postoperative valve regurgitation and function, independent of preoperative findings. Conclusion: Neither operative technique made a difference in operative or late mortality and morbidity. Depending on the surgeon's experience, ventricular septal defect size does not play a restrictive role in the selection of the technique to be used. The single-patch and double-patch methods as described here are methodical, comprehensible, repeatable, and reasonably long-lasting.

    Post-discharge heart failure monitoring program in Turkey: Hit-PoinT

    Objective: The aim of this study was to assess the efficacy and feasibility of an enhanced heart failure (HF) education program with 6-month telephone follow-up in post-discharge ambulatory HF patients. Methods: The Hit-PoinT trial was a multicenter, randomized, controlled trial of enhanced HF education with a 6-month telephone follow-up program (EHFP) versus routine care (RC) in patients with HF and reduced ejection fraction. A total of 248 patients from 10 centers in various geographical areas were randomized: 125 to EHFP and 123 to RC. Education included information on adherence to treatment, symptom recognition, diet and fluid intake, weight monitoring, and activity and exercise training. Patients were contacted by telephone after 1, 3, and 6 months. The primary study endpoint was cardiovascular death. Results: Although all-cause mortality did not differ between the EHFP and RC groups (p=NS), the percentage of cardiovascular deaths in the EHFP group was significantly lower than in the RC group at the 6-month follow-up (5.6% vs. 8.9%, p=0.04). The median number of emergency room visits was one, and the median numbers of all-cause hospitalizations and heart failure hospitalizations were zero. Twenty-three percent of the EHFP group and 35% of the RC group had more than the median number of emergency room visits (p=0.05). There was no significant difference in the median number of all-cause or heart failure hospitalizations. At baseline, 60% of patients in EHFP and 61% in RC were in NYHA Class III or IV, while at the 6-month follow-up only 12% in EHFP and 32% in RC were in NYHA Class III or IV (p=0.001). Conclusion: These results demonstrate the potential clinical benefits of an enhanced HF education and follow-up program led by a cardiologist in reducing cardiovascular deaths and emergency room visits, with an improvement in functional capacity at 6 months, in post-discharge ambulatory HF patients. (Türk Kardiyoloji Derneği Kalp Yetmezliği Çalışma Grubu)

    Finding graph medians using graph embedding techniques

    Graph theory applications are frequently used in image processing, computer vision, bioinformatics, data mining, and artificial intelligence. Representing objects with graphs and matching images are, in turn, an important part of object recognition applications. Most sample image data contain a large amount of noise caused by natural conditions, so the error levels in the final results are high. When each image is represented by a graph, the matching problem inherently becomes equivalent to a graph comparison problem. Since the amount of data is huge and graph comparison is costly, approximation algorithms are used to decrease the total cost. To compare the given graphs, a representative graph is computed for each group of graphs; it is anticipated that comparing a query graph only with the representative graphs gives faster results that are tolerably close to the exact answer. Previous work compared graphs mainly by mapping each graph in the graph space to a vector in a vector space and then operating in the vector space at lower cost. In this work, for each data class, every graph in the set is mapped isomorphically into a geometric space under the l_1 norm (Manhattan distance metric) so that data loss is kept to a minimum. Consequently, the cost of finding the representative graph is decreased, and the Caterpillar Decomposition technique we use keeps data loss at a minimum. Unlike previous work, each point in the geometric space corresponds to a node in the graph, which further reduces data loss. The K-Means algorithm is used in the vector space to find the representative graph. To compare the resulting set of representative points with the graphs, 25 different distance calculations of the Hausdorff Distance algorithm are used. This mechanism makes it possible to obtain faster results with a minimal error level.
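
    The final comparison step can be illustrated roughly as follows (a sketch under assumed inputs, not the paper's implementation): once each graph is embedded as a point set and a representative point set is obtained via K-Means, candidate graphs can be scored by their Hausdorff distance to the representative set, e.g. with scipy and scikit-learn.

        # Hypothetical sketch: score embedded graphs by Hausdorff distance
        # to a K-Means representative point set. Assumes numpy/scipy/sklearn.
        import numpy as np
        from scipy.spatial.distance import directed_hausdorff
        from sklearn.cluster import KMeans

        def representative_points(embedded_graphs, n_points):
            """Cluster all node embeddings of a class; the cluster centers
            serve as the class's representative point set."""
            all_points = np.vstack(embedded_graphs)
            return KMeans(n_clusters=n_points, n_init=10).fit(all_points).cluster_centers_

        def hausdorff(u, v):
            # Symmetric Hausdorff distance from the two directed distances.
            return max(directed_hausdorff(u, v)[0], directed_hausdorff(v, u)[0])

        # Toy class of two embedded graphs (each row: one node's embedding).
        class_graphs = [np.array([[0.0, 0.0], [1.0, 1.0]]),
                        np.array([[0.1, 0.0], [1.0, 0.9]])]
        rep = representative_points(class_graphs, n_points=2)
        query = np.array([[0.0, 0.1], [0.9, 1.0]])
        print(hausdorff(query, rep))  # small value: query resembles the class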

    Adaptive Fault Detection Scheme Using an Optimized Self-healing Ensemble Machine Learning Algorithm

    This paper proposes a new cost-efficient, adaptive, and self-healing algorithm that detects faults in real time within a short period and with high accuracy, even in situations where faults are difficult to detect. Rather than using traditional machine learning (ML) algorithms or hybrid signal processing techniques, a new framework based on an optimization-enabled weighted ensemble method is developed that combines essential ML algorithms. In the proposed method, the system selects and combines appropriate ML algorithms based on Particle Swarm Optimization (PSO) weights. For this purpose, power system failures are simulated using PSCAD-Python co-simulation. One of the salient features of this study is that the proposed solution works on real-time raw data without using any pre-computational techniques or pre-stored information. Therefore, the proposed technique can work on different systems, topologies, or data collections. The proposed fault detection technique is validated via PSCAD-Python co-simulation on a modified and standard IEEE-14 bus and a standard IEEE-39 bus system, considering network faults that are difficult to detect.
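
    The core mechanism, a weighted ensemble whose weights are tuned by PSO, can be sketched roughly as follows. This is an illustrative sketch with assumed base models and a toy dataset, not the paper's framework.

        # Hypothetical sketch: tune ensemble weights with a tiny PSO so the
        # weighted vote of base classifiers maximizes validation accuracy.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier

        X, y = make_classification(n_samples=500, random_state=0)
        X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)
        models = [m.fit(X_tr, y_tr) for m in (LogisticRegression(max_iter=1000),
                                              DecisionTreeClassifier(),
                                              RandomForestClassifier())]
        probs = np.stack([m.predict_proba(X_val)[:, 1] for m in models])

        def accuracy(w):
            blended = w @ probs / w.sum()  # weighted average probability
            return ((blended > 0.5) == y_val).mean()

        # Minimal PSO over positive weights (no library, for illustration).
        rng = np.random.default_rng(0)
        pos = rng.random((20, len(models)))  # particle positions = weights
        vel = np.zeros_like(pos)
        best = pos.copy()                    # per-particle best positions
        best_fit = np.array([accuracy(w) for w in pos])
        for _ in range(30):
            g = best[best_fit.argmax()]      # global best weights so far
            vel = 0.7 * vel + 1.5 * rng.random(pos.shape) * (best - pos) \
                            + 1.5 * rng.random(pos.shape) * (g - pos)
            pos = np.clip(pos + vel, 1e-6, None)  # keep weights positive
            fit = np.array([accuracy(w) for w in pos])
            improved = fit > best_fit
            best[improved], best_fit[improved] = pos[improved], fit[improved]

        w = best[best_fit.argmax()]
        print("ensemble weights:", w / w.sum())

    In the paper's setting, the base learners would be fed real-time raw measurements from the PSCAD-Python co-simulation instead of a synthetic dataset.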

    Multiple Graph Abstractions For Parallel Routing Over Virtual Topologies

    High-throughput data transfers across the Internet have become a challenge with the deployment of data centers and cloud platforms. In this paper, we propose to utilize the cores of a router to build multiple abstractions of the underlying topology in order to parallelize end-to-end (e2e) streams for bulk data transfers. By abstracting a different graph for each core, we steer each core to calculate a different e2e path in parallel. The e2e transfers can then use the shortest paths obtained from each subgraph to increase the total throughput over the underlying network. Even though calculating shortest paths is well optimized in legacy routing protocols (e.g., OSPF), finding an optimal set of subgraphs that generates non-overlapping and effective multiple paths is a challenging problem. To this end, we analyze centrality metrics to identify, without coordination, the potentially highest-loaded routers or edges in the topology and eliminate them from the subgraphs. We evaluate the heuristics in terms of aggregate throughput and robustness against failures.
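
    A rough sketch of the centrality-based elimination idea follows (assumed details, using networkx's edge betweenness centrality as a stand-in for the paper's specific metrics): each core independently drops a different slice of the most central edges and routes on what remains.

        # Hypothetical sketch: each core removes a different slice of the
        # most central (likely highest-loaded) edges, then routes on its
        # own subgraph, without coordinating with the other cores.
        import networkx as nx

        def subgraph_for_core(graph, core_id, n_cores):
            """Build core_id's abstraction by removing all but its own
            slice of the edges ranked by betweenness centrality."""
            ranked = sorted(nx.edge_betweenness_centrality(graph).items(),
                            key=lambda kv: kv[1], reverse=True)
            to_drop = [e for i, (e, _) in enumerate(ranked[:n_cores])
                       if i != core_id]  # keep a different central edge per core
            sub = graph.copy()
            sub.remove_edges_from(to_drop)
            return sub

        G = nx.cycle_graph(6)
        G.add_edge(0, 3)  # a shortcut that attracts load
        for core in range(2):
            sub = subgraph_for_core(G, core, n_cores=2)
            print(core, nx.shortest_path(sub, 0, 3))  # different e2e paths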

    Is voiding cystourethrography necessary for evaluating unilateral ectopic pelvic kidney?

    Objective: The aim of this study is to evaluate whether voiding cystourethrography (VCUG) is necessary in the evaluation of unilateral ectopic pelvic kidney (UEPK) to identify vesicoureteral reflux (VUR).

    NATIONAL TRENDS IN THE USE OF MAGNETIC RESONANCE IMAGING IN BREAST IMAGING; A SURVEY: PROTOCOL MF 10-01

    Purpose: Several guidelines have been published to standardize the use of breast MRI for breast diseases in developed countries. However, each country should consider its own infrastructure and create its own guidelines. In this study, we aim to evaluate current breast MRI usage practice in Turkey.

    Energy Trading on a Peer-to-Peer Basis between Virtual Power Plants Using Decentralized Finance Instruments

    Over time, distribution systems have come to include increasing numbers of distributed energy resources (DERs), driven by advances in auxiliary power electronics and information and communication technologies (ICT), and by cost reductions. Electric vehicles (EVs) will undoubtedly join the energy community alongside DERs, and energy transfers from vehicles to the grid and vice versa will become more extensive in the future. Virtual power plants (VPPs) will also play a key role in integrating these systems and participating in wholesale markets. Energy trading on a peer-to-peer (P2P) basis is a promising business model for transactive energy that helps balance local supply and demand. Moreover, a market scheme between VPPs can help DER owners make more profit while reducing renewable energy waste. For this purpose, an inter-VPP P2P trading scheme is proposed. The scheme utilizes the cutting-edge technologies of the Avalanche blockchain platform, developed from scratch with decentralized finance (DeFi), decentralized applications (DApps), and Web3 workflows in mind. Avalanche is more scalable and has faster transaction finality than its layer-1 predecessors, and it provides interoperability with other common blockchain networks, facilitating inter-VPP P2P trading between different blockchain-based VPPs. The merits of DeFi contribute significantly to the workflow in this type of energy trading scenario, as the price mechanism can be determined using open-market-like instruments. A detailed case study was used to examine the effectiveness of the proposed scheme and workflow, and important conclusions were drawn.
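
    The inter-VPP market mechanism can be pictured, very roughly, as matching VPPs' buy and sell offers against each other. The sketch below uses hypothetical data and a simple greedy double-auction rule as an off-chain stand-in; it is not the paper's Avalanche/DeFi implementation, which runs on-chain.

        # Hypothetical sketch: match VPP sell offers with VPP bids, cheapest
        # seller to highest bidder, clearing each trade at the midpoint price.
        from dataclasses import dataclass

        @dataclass
        class Offer:
            vpp: str
            kwh: float
            price: float  # per kWh

        def match(bids, asks):
            """Greedy double auction; returns (buyer, seller, kWh, price)."""
            bids = sorted(bids, key=lambda o: o.price, reverse=True)
            asks = sorted(asks, key=lambda o: o.price)
            trades = []
            while bids and asks and bids[0].price >= asks[0].price:
                buy, sell = bids[0], asks[0]
                qty = min(buy.kwh, sell.kwh)
                trades.append((buy.vpp, sell.vpp, qty,
                               (buy.price + sell.price) / 2))
                buy.kwh -= qty
                sell.kwh -= qty
                if buy.kwh == 0:
                    bids.pop(0)
                if sell.kwh == 0:
                    asks.pop(0)
            return trades

        bids = [Offer("VPP-A", 40, 0.12), Offer("VPP-B", 25, 0.10)]
        asks = [Offer("VPP-C", 50, 0.09), Offer("VPP-D", 30, 0.11)]
        print(match(bids, asks))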